
    Modeling and Simulation of Spiking Neural Networks with Resistive Switching Synapses

    Artificial intelligence (AI) has recently achieved excellent results in implementing human brain cognitive functions such as learning, recognition and inference, by intensively running deep neural networks on high-performance computing platforms. However, the excessive computational time and power consumption required to achieve such performance make AI inefficient compared with the human brain. To replicate the efficient operation of the human brain in hardware, novel nanoscale memory devices such as resistive switching random access memory (RRAM) have attracted strong interest thanks to their ability to mimic biological learning in silico. In this chapter, the design, modeling and simulation of RRAM-based electronic synapses capable of emulating biological learning rules are first presented. Then, the application of RRAM synapses in spiking neural networks to achieve neuromorphic tasks such as on-line learning of images and associative learning is addressed.

    Analytical Modeling of Current Overshoot in Oxide-Based Resistive Switching Memory (RRAM)

    Current overshoot due to parasitic capacitance during the set transition represents a major concern for controlling resistance and current consumption in resistive switching memory (RRAM) arrays. In this letter, the impact of current overshoot on the low-resistance state (LRS) is evaluated by means of experiments on one-transistor/one-resistor structures of HfO2 RRAM. We develop a physics-based analytical model able to calculate the LRS resistance and the corresponding reset current through a closed-form formula. The model allows the impact of current overshoot to be predicted for any value of compliance current, set voltage, and parasitic capacitance.

    Memristive neural network for on-line learning and tracking with brain-inspired spike timing dependent plasticity

    Brain-inspired computation can revolutionize information technology by introducing machines capable of recognizing patterns (images, speech, video) and interacting with the external world in a cognitive, humanlike way. Achieving this goal requires first gaining a detailed understanding of brain operation, and second identifying a scalable microelectronic technology capable of reproducing some of the inherent functions of the human brain, such as the high synaptic connectivity (~10^4) and the peculiar time-dependent synaptic plasticity. Here we demonstrate unsupervised learning and tracking in a spiking neural network with memristive synapses, where synaptic weights are updated via brain-inspired spike timing dependent plasticity (STDP). The synaptic conductance is updated by the local time-dependent superposition of pre- and post-synaptic spikes within a hybrid one-transistor/one-resistor (1T1R) memristive synapse. Only 2 synaptic states, namely the low resistance state (LRS) and the high resistance state (HRS), are sufficient to learn and recognize patterns. Unsupervised learning of a static pattern and tracking of a dynamic pattern of up to 4 × 4 pixels are demonstrated, paving the way for intelligent hardware technology with up-scaled memristive neural networks.
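The binary (LRS/HRS) STDP rule described in this abstract can be illustrated with a minimal sketch. This is not the authors' circuit implementation; the conductance values and plasticity window are assumed, purely for illustration of how a causal pre-before-post spike pairing potentiates a two-state synapse and the reverse order depresses it.

```python
LRS, HRS = 1.0, 0.01   # conductances in arbitrary units (assumed values)
STDP_WINDOW = 10.0     # ms; assumed width of the plasticity window

def stdp_update(conductance, t_pre, t_post):
    """Return the new conductance of a binary memristive synapse
    given a pre-synaptic and a post-synaptic spike time (in ms)."""
    dt = t_post - t_pre
    if 0 < dt <= STDP_WINDOW:
        return LRS          # causal pairing -> potentiation (set to LRS)
    elif -STDP_WINDOW <= dt < 0:
        return HRS          # anti-causal pairing -> depression (reset to HRS)
    return conductance      # spikes too far apart: no change

# Example: a pre-spike at t=2 ms followed by a post-spike at t=5 ms
w = stdp_update(HRS, t_pre=2.0, t_post=5.0)   # potentiated to LRS
```

Because only two states exist, the update is a threshold decision on spike-timing order rather than a graded weight change, mirroring the abstract's point that LRS and HRS alone suffice for pattern learning.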

    Unsupervised Learning by Spike Timing Dependent Plasticity in Phase Change Memory (PCM) Synapses

    We present a novel one-transistor/one-resistor (1T1R) synapse for neuromorphic networks, based on phase change memory (PCM) technology. The synapse is capable of spike-timing dependent plasticity (STDP), where gradual potentiation relies on the set transition, namely crystallization, in the PCM, while depression is achieved via reset, or amorphization, of a chalcogenide active volume. STDP characteristics are demonstrated by experiments under variable initial conditions and numbers of pulses. Finally, we support the applicability of the 1T1R synapse for learning and recognition of visual patterns by simulating fully connected neuromorphic networks with 2 or 3 layers, achieving high recognition efficiency. The proposed scheme provides a feasible low-power solution for on-line unsupervised machine learning in smart reconfigurable sensors.

    Reliability of Logic-in-Memory Circuits in Resistive Memory Arrays

    Logic-in-memory (LiM) circuits based on resistive random access memory (RRAM) devices and material implication logic are promising candidates for the development of low-power computing devices that could fulfill the growing demand of distributed computing systems. However, these circuits are affected by many reliability challenges that arise from device nonidealities (e.g., variability) and the characteristics of the employed circuit architecture. Thus, an accurate investigation of variability at the array level is needed to evaluate the reliability and performance of such circuit architectures. In this work, we explore the reliability and performance of smart IMPLY (SIMPLY) (i.e., a recently proposed LiM architecture with improved reliability and performance) on two 4-kb RRAM arrays based on different resistive switching oxides integrated in the back end of line (BEOL) of the 0.25-μm BiCMOS process. We analyze the tradeoff between reliability and energy consumption of the SIMPLY architecture by exploiting the results of an extensive array-level variability characterization of the two technologies. Finally, we study the worst-case performance of a full adder implemented with the SIMPLY architecture and benchmark it against the analogous CMOS implementation. This work was supported by the European Union's Horizon 2020 Research and Innovation Programme under Grant 64863.
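The material-implication primitive underlying IMPLY-based LiM architectures such as SIMPLY can be sketched at the logic level. This is a hedged, device-free model (boolean state stands in for LRS/HRS; the result of p IMPLY q overwrites device q, as in RRAM IMPLY gates); the `nand` helper is an illustrative construction, not taken from the paper, showing that IMPLY plus a FALSE-initialized work device is functionally complete.

```python
def imply(p, q):
    """Material implication: the target device q is overwritten
    with (NOT p) OR q, the core operation of RRAM IMPLY gates."""
    return (not p) or q

def nand(p, q, work=False):
    """NAND built from two IMPLY steps on a work device initialized
    to logic 0 (FALSE), illustrating functional completeness."""
    work = imply(p, work)     # work = NOT p
    return imply(q, work)     # (NOT q) OR (NOT p) = NAND(p, q)

# Truth table of p IMPLY q for all four input combinations
table = [imply(p, q) for p in (False, True) for q in (False, True)]
```

Since NAND is universal, any combinational circuit (including the full adder benchmarked in the abstract) can in principle be compiled into a sequence of such in-array IMPLY steps.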

    Learning of spatiotemporal patterns in a spiking neural network with resistive switching synapses

    The human brain is a complex integrated spatiotemporal system, where space (which neuron fires) and time (when a neuron fires) both carry information to be processed by cognitive functions. To parallel the energy efficiency and computing functionality of the brain, methodologies operating over both the space and time domains are thus essential. Implementing spatiotemporal functions within nanoscale devices capable of synaptic plasticity would be a significant step toward constructing a large-scale neuromorphic system that emulates the computing and energy performance of the human brain. We present a neuromorphic approach to brain-like spatiotemporal computing using resistive switching synapses. To process the spatiotemporal spike pattern, time-coded spikes are reshaped into exponentially decaying signals that are fed to a McCulloch-Pitts neuron. Recognition of spike sequences is demonstrated after supervised training of a multiple-neuron network with resistive switching synapses. Finally, we show that, owing to its sensitivity to precise spike timing, the spatiotemporal neural network is able to mimic the sound azimuth detection of the human brain.
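The decoding scheme described here (time-coded spikes reshaped into exponentially decaying traces, summed by a McCulloch-Pitts threshold neuron) can be sketched as follows. The decay constant, weights, and threshold are illustrative assumptions, not values from the paper.

```python
import math

TAU = 5.0  # ms; assumed decay constant of the spike trace

def trace(t_spike, t_now, tau=TAU):
    """Exponentially decaying trace of a spike, evaluated at t_now."""
    if t_now < t_spike:
        return 0.0
    return math.exp(-(t_now - t_spike) / tau)

def mcculloch_pitts(spike_times, weights, t_now, threshold):
    """Fire (1) if the weighted sum of decayed traces reaches threshold."""
    s = sum(w * trace(t, t_now) for w, t in zip(weights, spike_times))
    return 1 if s >= threshold else 0

# Sensitivity to precise spike timing: two near-coincident spikes
# produce a larger summed trace than two widely separated ones.
coincident = mcculloch_pitts([9.0, 10.0], [1.0, 1.0], t_now=10.0, threshold=1.5)
separated  = mcculloch_pitts([0.0, 10.0], [1.0, 1.0], t_now=10.0, threshold=1.5)
```

The timing sensitivity shown in the last two calls is the same property the abstract exploits for sound azimuth detection, where the relevant cue is the small inter-ear delay between spikes.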

    Stochastic Learning in Neuromorphic Hardware via Spike Timing Dependent Plasticity with RRAM Synapses

    Hardware processors for neuromorphic computing are gaining significant interest as they offer the possibility of true in-memory computing, thereby bypassing the speed and energy limitations of the von Neumann architecture. One of the major limitations of current neuromorphic technology is the lack of bio-realistic and scalable devices to improve the current design of artificial synapses and neurons. To overcome these limitations, the emerging technology of resistive switching memory has attracted wide interest as a nano-scaled synaptic element. This paper describes the implementation of perceptron-like neuromorphic hardware capable of spike-timing dependent plasticity (STDP), and its operation under stochastic learning conditions. The learning algorithm for single or multiple patterns, consisting of either static or dynamic visual input data, is described. The impact of noise is studied with respect to learning efficiency (false fire, true fire) and learning time. Finally, the impact of variations in the stochastic learning rule, such as the inversion of the time dependence of potentiation and depression in STDP, is considered. Overall, the work provides a proof of concept for unsupervised learning by STDP in memristive networks, giving insight into the dynamics of stochastic learning and supporting the understanding and design of neuromorphic networks with emerging memory devices.

    Brain-Inspired Recurrent Neural Network with Plastic RRAM Synapses

    Milo V, Chicca E, Ielmini D. Brain-Inspired Recurrent Neural Network with Plastic RRAM Synapses. In: 2018 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE; 2018: 1-5.